20 research outputs found

    Reuse It Or Lose It: More Efficient Secure Computation Through Reuse of Encrypted Values

    Full text link
    Two-party secure function evaluation (SFE) has become significantly more feasible, even on resource-constrained devices, because of advances in server-aided computation systems. However, there are still bottlenecks, particularly in the input validation stage of a computation. Moreover, SFE research has not yet devoted sufficient attention to the important problem of retaining state after a computation has been performed so that expensive processing does not have to be repeated if a similar computation is done again. This paper presents PartialGC, an SFE system that allows the reuse of encrypted values generated during a garbled-circuit computation. We show that using PartialGC can reduce computation time by as much as 96% and bandwidth by as much as 98% in comparison with previous outsourcing schemes for secure computation. We demonstrate the feasibility of our approach with two sets of experiments, one in which the garbled circuit is evaluated on a mobile device and one in which it is evaluated on a server. We also use PartialGC to build a privacy-preserving "friend finder" application for Android. The reuse of previous inputs to allow stateful evaluation represents a new way of looking at SFE and further reduces computational barriers. Comment: 20 pages, shorter conference version published in Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, Pages 582-596, ACM New York, NY, USA.
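
    The object being reused here is the garbled gate: each wire carries two random labels (one per bit), and the gate's truth table is encrypted so that holding one label per input wire reveals exactly one output label. Below is a minimal toy sketch of that mechanic, plus the reuse idea at its simplest: an output wire's labels are kept and fed into a later gate, so the value never has to be re-entered. This is only the intuition; PartialGC's actual protocol adds server-aided cut-and-choose garbling and consistency checks that this sketch omits entirely, and all names here are illustrative.

```python
import hashlib
import os
import random

LABEL_LEN = 16  # bytes per wire label

def new_wire():
    """A wire is a pair of random labels encoding bit 0 and bit 1."""
    return (os.urandom(LABEL_LEN), os.urandom(LABEL_LEN))

def _pad(la, lb):
    return hashlib.sha256(la + lb).digest()       # 32-byte one-time pad

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and(a, b, c):
    """Shuffled encrypted truth table for c = a AND b."""
    rows = []
    for x in (0, 1):
        for y in (0, 1):
            plain = c[x & y] + b"\x00" * 16       # output label + zero tag
            rows.append(_xor(_pad(a[x], b[y]), plain))
    random.shuffle(rows)
    return rows

def eval_and(table, la, lb):
    """Holding one label per input wire decrypts exactly one row."""
    for row in table:
        plain = _xor(_pad(la, lb), row)
        if plain[LABEL_LEN:] == b"\x00" * 16:     # tag marks the right row
            return plain[:LABEL_LEN]
    raise ValueError("no decryptable row")

# One computation: the evaluator learns the label for 1 AND 1 on wire c...
a, b, c = new_wire(), new_wire(), new_wire()
out = eval_and(garble_and(a, b, c), a[1], b[1])
assert out == c[1]

# ...and reuse: wire c seeds a later circuit, so that value needs no
# fresh input processing the second time around.
d, e = new_wire(), new_wire()
assert eval_and(garble_and(c, d, e), out, d[0]) == e[0]   # (1 AND 0) == 0
```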

    Synthpop++: A Hybrid Framework for Generating A Country-scale Synthetic Population

    Full text link
    Population censuses are vital to public policy decision-making. They provide insight into human resources, demography, culture, and economic structure at local, regional, and national levels. However, such surveys are very expensive (especially for low and middle-income countries with high populations, such as India), time-consuming, and may also raise privacy concerns, depending upon the kinds of data collected. In light of these issues, we introduce SynthPop++, a novel hybrid framework, which can combine data from multiple real-world surveys (with different, partially overlapping sets of attributes) to produce a real-scale synthetic population of humans. Critically, our population maintains family structures comprising individuals with demographic, socioeconomic, health, and geolocation attributes: this means that our "fake" people live in realistic locations, have realistic families, etc. Such data can be used for a variety of purposes: we explore one such use case, agent-based modelling of infectious disease in India. To gauge the quality of our synthetic population, we use both machine learning and statistical metrics. Our experimental results show that the synthetic population realistically simulates the population of various administrative units of India, producing real-scale, detailed data at the desired level of zoom -- from cities, to districts, to states, eventually combining to form a country-scale synthetic population. Comment: 9 pages, 6 figures, Accepted for oral presentation at AI4ABM workshop at ICLR 2023.
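
    The abstract does not spell out the fitting machinery, but a standard statistical building block for this kind of task (and a reasonable mental model for the non-ML half of a hybrid pipeline) is iterative proportional fitting: rescale a seed joint table until its margins match the marginals reported by different surveys, then sample synthetic individuals from the fitted joint. A minimal sketch, purely illustrative of that building block rather than of SynthPop++ itself:

```python
import numpy as np

def ipf(seed, row_marginal, col_marginal, iters=100, tol=1e-9):
    """Iterative proportional fitting: rescale a seed joint table until its
    margins match two (possibly differently sourced) survey marginals."""
    table = seed.astype(float).copy()
    for _ in range(iters):
        table *= (row_marginal / table.sum(axis=1))[:, None]
        table *= (col_marginal / table.sum(axis=0))[None, :]
        if np.allclose(table.sum(axis=1), row_marginal, atol=tol):
            break
    return table

# Toy attributes: 3 age bands (from survey A) x 2 income bands (from survey B).
fitted = ipf(np.ones((3, 2)),
             row_marginal=np.array([50.0, 30.0, 20.0]),
             col_marginal=np.array([60.0, 40.0]))

# Draw 1000 synthetic individuals from the fitted joint distribution.
rng = np.random.default_rng(0)
cells = rng.choice(fitted.size, size=1000, p=(fitted / fitted.sum()).ravel())
age_band, income_band = np.unravel_index(cells, fitted.shape)
```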

    Tight Short-Lived Signatures

    Full text link
    A time-lock puzzle (TLP) sends information into the future: a predetermined number of sequential computations must occur (i.e., a predetermined amount of time must pass) to retrieve the information, regardless of parallelization. Buoyed by the excitement around secure decentralized applications and cryptocurrencies, researchers have produced numerous constructions of TLP variants and related applications over the last decade (e.g., cost-efficient blockchain designs, randomness beacons, e-voting, etc.). In this poster, we first extend the notion of TLP by formally defining the "time-lock public key encryption" (TLPKE) scheme. Next, we introduce and construct a "tight short-lived signatures" scheme using our TLPKE. Furthermore, to test the validity of our proposed schemes, we provide a proof-of-concept implementation and run detailed simulations.
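
    For context, the classic construction behind the TLP notion is the Rivest-Shamir-Wagner repeated-squaring puzzle: whoever knows the factorization of n can compute 2^(2^t) mod n with one reduced exponentiation, while a solver without it must perform t squarings one after another. A toy sketch follows; the primes are far too small to be secure and only illustrate the mechanics, and the poster's TLPKE construction is not shown here.

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle. The primes are absurdly small
# and chosen only to make the sequential-squaring mechanics visible.
p, q = 1000003, 1000033          # puzzle generator's secret primes
n, phi = p * q, (p - 1) * (q - 1)
t = 1_000_000                    # required number of sequential squarings

def create_puzzle(message: int) -> int:
    # Knowing phi, the generator reduces the exponent first and pays
    # only two modular exponentiations, independent of t.
    shortcut = pow(2, pow(2, t, phi), n)          # 2^(2^t) mod n, the fast way
    return (message + shortcut) % n               # blind the message

def solve_puzzle(ciphertext: int) -> int:
    # Without phi, the solver must square t times, one after another;
    # the chain cannot be parallelised.
    x = 2
    for _ in range(t):
        x = x * x % n
    return (ciphertext - x) % n

assert solve_puzzle(create_puzzle(424242)) == 424242
```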

    ContextCLIP: Contextual Alignment of Image-Text pairs on CLIP visual representations

    Full text link
    State-of-the-art empirical work has shown that visual representations learned by deep neural networks are robust in nature and capable of performing classification tasks on diverse datasets. For example, CLIP demonstrated zero-shot transfer performance on multiple datasets for classification tasks in a joint embedding space of image and text pairs. However, it showed negative transfer performance on standard datasets, e.g., Birdsnap, RESISC45, and MNIST. In this paper, we propose ContextCLIP, a contextual and contrastive learning framework for the contextual alignment of image-text pairs by learning robust visual representations on the Conceptual Captions dataset. Our framework was observed to improve the image-text alignment by aligning text and image representations contextually in the joint embedding space. ContextCLIP showed good qualitative performance for text-to-image retrieval tasks and enhanced classification accuracy. We evaluated our model quantitatively with zero-shot transfer and fine-tuning experiments on the CIFAR-10, CIFAR-100, Birdsnap, RESISC45, and MNIST datasets for the classification task. Comment: 11 Pages, 7 Figures, 2 Tables, ICVGIP.
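
    The joint embedding space mentioned above is trained with a symmetric contrastive objective: matched image-text pairs are pulled together and mismatched pairs within a batch are pushed apart. The sketch below shows that standard CLIP-style loss, which ContextCLIP builds on; the paper's additional contextual-alignment terms are not reproduced here, and the embeddings are random stand-ins.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched image-text pairs: the
    i-th image should score highest against the i-th caption and vice versa."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # scaled cosine similarities

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.diag(log_probs).mean()         # correct pairs lie on the diagonal

    # Average the image-to-text and text-to-image directions.
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

rng = np.random.default_rng(0)
print(clip_contrastive_loss(rng.normal(size=(8, 512)), rng.normal(size=(8, 512))))
```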

    SEM-CS: Semantic CLIPStyler for Text-Based Image Style Transfer

    Full text link
    CLIPStyler demonstrated image style transfer with realistic textures using only the style text description (instead of requiring a reference style image). However, the ground semantics of objects in the style transfer output are lost due to style spillover on salient and background objects (content mismatch) or over-stylization. To solve this, we propose Semantic CLIPStyler (Sem-CS) that performs semantic style transfer. Sem-CS first segments the content image into salient and non-salient objects and then transfers artistic style based on a given style text description. The semantic style transfer is achieved using a global foreground loss (for salient objects) and a global background loss (for non-salient objects). Our empirical results, including DISTS, NIMA and user study scores, show that our proposed framework yields superior qualitative and quantitative performance. Comment: 11 Pages, 4 Figures, 2 Tables.
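
    The split into a global foreground loss and a global background loss can be pictured as one style objective evaluated twice under a saliency mask. The sketch below uses a placeholder squared-error term purely to show the masked split; the paper's actual losses are CLIP-based directional losses, and all array shapes here are assumptions.

```python
import numpy as np

def masked_style_loss(features, fg_target, bg_target, mask, w_fg=1.0, w_bg=1.0):
    """Evaluate a (placeholder) style distance separately on salient and
    non-salient pixels, then combine, mirroring the foreground/background
    split. features: (H, W, C); fg_target, bg_target: (C,); mask: (H, W)."""
    m = mask[..., None].astype(float)             # saliency mask from a segmenter
    fg = (((features - fg_target) ** 2) * m).sum() / (m.sum() + 1e-8)
    bg = (((features - bg_target) ** 2) * (1.0 - m)).sum() / ((1.0 - m).sum() + 1e-8)
    return w_fg * fg + w_bg * bg

feats = np.zeros((4, 4, 3))                       # stand-in stylised features
mask = np.zeros((4, 4)); mask[:2] = 1             # top half marked salient
print(masked_style_loss(feats, np.ones(3), np.zeros(3), mask))  # 3.0
```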

    Trenchcoat: Human-Computable Hashing Algorithms for Password Generation

    Full text link
    The average user has between 90 and 130 online accounts, and around 3 × 10^11 passwords are in use this year. Most people are terrible at remembering "random" passwords, so they reuse or create similar passwords using a combination of predictable words, numbers, and symbols. Previous password-generation or management protocols have imposed so large a cognitive load that users have abandoned them in favor of insecure yet simpler methods (e.g., writing them down or reusing minor variants). We describe a range of candidate human-computable "hash" functions suitable for use as password generators - as long as the human (with minimal education assumptions) keeps a single, easily-memorizable "master" secret - and rate them by various metrics, including effective security. These functions hash master-secrets with user accounts to produce sub-secrets that can be used as passwords: F_R(s, w) → y takes a website w and produces a password y, parameterized by a master secret s, which may or may not be a string. We exploit the unique configuration R of each user's associative and implicit memory (detailed in section 2) to ensure that sources of randomness unique to each user are present in each F_R. An adversary cannot compute or verify F_R efficiently since R is unique to each individual; in that sense, our hash function is similar to a physically unclonable function. For the algorithms we propose, the user need only complete primitive operations such as addition, spatial navigation or searching. Critically, most of our methods are also accessible to neurodiverse, or cognitively or physically differently-abled persons. We present results from a survey (n=134 individuals) investigating real-world usage of these methods and how people currently come up with their passwords; we also survey 400 websites to collate current password advice.
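
    A machine-side reference for the F_R(s, w) → y interface helps pin down the contract: one memorized master secret, one deterministic password per site. The sketch below is not any of the paper's candidate functions (those are designed to be computed in the user's head, and the memory configuration R has no analogue here); it only mirrors the input/output behavior, with an ordinary cryptographic hash standing in.

```python
import hashlib
import string

def F(master_secret: str, website: str, length: int = 12) -> str:
    """Deterministic per-site password from one memorised master secret,
    mirroring the contract F_R(s, w) -> y. A SHA-256 stand-in replaces the
    paper's head-computable candidates, and nothing here models R."""
    alphabet = string.ascii_letters + string.digits
    digest = hashlib.sha256(f"{master_secret}|{website}".encode()).digest()
    return "".join(alphabet[b % len(alphabet)] for b in digest[:length])

print(F("correct horse battery staple", "example.com"))   # stable across runs
print(F("correct horse battery staple", "bank.example"))  # different per site
```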

    Fast and Secure Oblivious Stable Matching over Arithmetic Circuits

    Get PDF
    The classic stable matching algorithm of Gale and Shapley (American Mathematical Monthly '62) and subsequent variants such as those by Roth (Mathematics of Operations Research '82) and Abdulkadiroglu et al. (American Economic Review '05) have been used successfully in a number of real-world scenarios, including the assignment of medical-school graduates to residency programs, New York City teenagers to high schools, and Norwegian and Singaporean students to schools and universities. However, all of these suffer from one shortcoming: in order to avoid strategic manipulation, they require all participants to submit their preferences to a trusted third party who performs the computation. In some sensitive application scenarios, there is no appropriate (or cost-effective) trusted party. This makes stable matching a natural candidate for secure computation. Several approaches based on secure multiparty computation (MPC), fully homomorphic encryption, etc., have been proposed to address this, but many of these protocols are slow and impractical for real-world use. We propose a novel primitive for privacy-preserving stable matching using MPC (i.e., arithmetic circuits, for any number of parties). Specifically, we discuss two variants of oblivious stable matching and describe an improved oblivious stable matching in the random-memory-access model based on lookup tables. To explore and showcase the practicality of our proposed primitive, we present detailed benchmarks (at various problem sizes) of our constructions using two popular frameworks: SCALE-MAMBA and MP-SPDZ.
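
    The function all of these protocols compute obliviously is the classic deferred-acceptance algorithm itself, shown below in the clear. The MPC constructions in the paper evaluate the same logic inside arithmetic circuits so that no party learns anyone's preference lists; the data-dependent memory accesses visible here are exactly what the lookup-table variant has to hide.

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Classic deferred-acceptance (Gale-Shapley). Each prefs argument maps
    a name to a list of names in descending order of preference."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)                   # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}  # next index to propose to
    match = {}                                    # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in match:                        # acceptor is free
            match[a] = p
        elif rank[a][p] < rank[a][match[a]]:      # acceptor prefers p
            free.append(match[a])
            match[a] = p
        else:                                     # rejected; p proposes again later
            free.append(p)
    return match

prefs_m = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
prefs_w = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(prefs_m, prefs_w))  # {'w1': 'm2', 'w2': 'm1'}
```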

    India’s “Aadhaar” Biometric ID: Structure, Security, and Vulnerabilities

    Get PDF
    India's Aadhaar is the largest biometric identity system in history, designed to help deliver subsidies, benefits, and services to India's 1.4 billion residents. The Unique Identification Authority of India (UIDAI) is responsible for providing each resident (not each citizen) with a distinct identity - a 12-digit Aadhaar number - using their biometric and demographic details. We provide the first comprehensive description of the Aadhaar infrastructure, collating information across thousands of pages of public documents and releases, as well as direct discussions with Aadhaar developers. Critically, we describe the first known cryptographic issue within the system, and discuss how a workaround prevents it from being exploitable at scale. Further, we categorize and rate various security and privacy limitations and the corresponding threat actors, examine the legitimacy of alleged security breaches, and discuss improvements and mitigation strategies.